V. Cherniavsky and E. Konrad in %2On Limitations for Algorithmic Modelling
of Intelligent Activities%1 claim as follows:
"The purpose of this paper is to show that natural boundaries separating
intelligence mechanisms of any formalism from the one of the
human apply not only to truth behavior but also to such intelligent
activities as syntax analysis, understanding, learning, memory processing,
and heuristics."
Before actually reading their paper, we conjecture as follows:
They have a diagonal proof of the non-existence of a Turing machine
that will be uniformly successful in doing some class %2A%1 of tasks. They
note that humans often do such tasks successfully, and conclude that
therefore humans have a capability that machines cannot possess.
The error is that they don't, and most likely can't, show that
a human can do all tasks in the class %2A%1. It may be that for
some subclass %2A' ⊂ A%1, there is a Turing machine that does all
the tasks in %2A'%1 and that all tasks humans actually do are
confined to %2A'%1.
	Perhaps they also show that, given any Turing machine that
does some class %2A' ⊂ A%1 of tasks, they can construct a task that
the machine can't do and they can. They then conclude that man is
better than machine. This is a somewhat more subtle error. The
process of constructing, from a class %2A'%1 of tasks doable by
machine, a task the machine can't do is itself mechanizable and
doable by machine. It gives rise to an improved machine that can do
a class %2A'' ⊃ A'%1 of tasks, as the sketch below illustrates.
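	The mechanical character of the improvement step can be made
explicit. Below is a minimal sketch in Python (the language is our
choice; nothing here comes from the paper). A class of tasks is
represented by its decision procedure, and diagonal_task is a
hypothetical placeholder for whatever constructive diagonalization
defeats a given machine.

    def diagonal_task(decider):
        # Hypothetical stand-in for the constructive diagonalization:
        # from the description of a machine deciding A', compute a task
        # outside A'.  Here we merely tag the machine; the real
        # construction depends on the particular diagonal proof.
        return ("diagonal of", decider)

    def improve(decider):
        # The improvement step: from a machine for A', mechanically
        # build a machine for A'' = A' plus the diagonal task that
        # defeated it.
        hard_task = diagonal_task(decider)  # computed once, mechanically
        def improved(task):
            if task == hard_task:
                return True                 # covers the old blind spot
            return decider(task)            # agrees with A' elsewhere
        return improved

Iterating improve yields machines for %2A' ⊂ A'' ⊂ A''' ⊂ ...%1, so
the diagonal construction confers no lasting advantage on whoever
applies it.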
We can imagine a dialog between two machines. One says to the
other, "Describe yourself, and I'll give a task you can't do and I
can". The second machine says, "You describe yourself first". The
situation is analogous to one person saying to another, "I can name
a bigger number than you can. You go first".
Of course, he intends to add one to the other person's number.
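	The second player's winning strategy is trivially mechanizable;
a one-line sketch (ours, purely illustrative):

    def respond(opponents_number):
        # Whoever names a number second wins: just add one.
        return opponents_number + 1

The machine dialog above is the same strategy with "number" replaced
by "machine description" and "add one" replaced by diagonalization.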
	Actually we can do better than mechanize a single step of the
process for generating an improved machine. We can improve not just
by one step but "by %Aw%1", by generating a machine which tests
whether the input is in any finite iterate %2A', A'', A''', ...%1 of
the improvement process.
How far can this be carried? The answer involves the
technicalities of recursive function theory. Namely, if %Aa%1 is
any recursive ordinal number, there is an improvement "by %Aa%1".
Since the limit of the recursive ordinals is not a recursive
ordinal, a machine can't do better than that by this method.
Neither, presumably, can a human.
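	The improvement "by %Aw%1" can be sketched the same way,
assuming an improvement step improve like the one above whose finite
levels are total deciders on the tasks they cover:

    def improve_omega(base, improve):
        # Improvement "by omega": accept a task iff some finite
        # iteration of the improvement step accepts it.  This is only
        # a semi-decision procedure: if no finite level succeeds, it
        # runs forever.
        def at_omega(task):
            level = base
            while True:                  # try levels 0, 1, 2, ...
                if level(task):
                    return True          # handled at some finite level
                level = improve(level)   # otherwise go up a level
        return at_omega

Diagonalizing against this machine and iterating again is what
carries the process through the recursive ordinals.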
We should perhaps call this complex of errors
the Rosenbloom fallacy, since its first appearance may have
been in P.C. Rosenbloom's little book on mathematical logic.
THE ROSENBLOOM FALLACY
We shall discuss a certain common fallacious argument that
begins with Goedel's theorems or undecidability theorems and
concludes that there are essential differences between human and
machine intelligence.
Let ⊗P be a class of problems. A standard mathematical example
is whether an arithmetic sentence ⊗s is true in the standard model of
arithmetic. The set of all sentences of arithmetic can be enumerated,
and the problem of arithmetic truth can be expressed as whether a sentence ⊗s
belongs to the subclass ⊗A of true sentences. Church's theorem says
that ⊗A is not recursively enumerable, and this shows that no
program on a current computer can always decide whether a sentence
is true. Because all candidates for a definition of computability
turn out to be equivalent, hardly anyone seriously hopes to get around
the undecidability of arithmetic truth by inventing some new kind
of machine. All current work in artificial intelligence is based on
programs for machines to which Church's theorem is applicable.
First of all, we need to be clear about what Church's theorem
tells us. It tells us that there is no program which will always
decide whether an arbitrary sentence of arithmetic is true. It doesn't
prohibit a program that decides a subclass of arithmetic sentences.
We can even have a program that will "try" any sentence of arithmetic,
will always be correct when it gives an answer, but which may give up
or run on indefinitely for problems that are too hard for it.
But this is all we can claim for humans. No one says that any
particular human can decide all sentences of arithmetic.
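	Such a program is easy to sketch, assuming a sound proof system
for arithmetic. The names checks, encode, and negate below are our
hypothetical stand-ins for a proof checker, an enumeration of
candidate proofs, and sentence negation; none of them comes from the
text.

    from itertools import count

    def try_sentence(sentence, checks, encode, negate):
        # Brute-force proof search: correct whenever it answers, but
        # runs on indefinitely for sentences the proof system settles
        # neither way -- exactly the behavior allowed above.
        for n in count():
            candidate = encode(n)
            if checks(candidate, sentence):
                return True            # proof found: sentence is true
            if checks(candidate, negate(sentence)):
                return False           # refutation found: it is false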
	However, there is more to the argument. The proof of Church's
theorem is constructive in that, given a correct program, it yields
a true sentence of arithmetic that the program will not decide.
From this it is commonly and mistakenly concluded that humans are
superior to machines, since if you give me a machine, I will
construct a sentence that I know is true and the machine doesn't.
I want to show that this argument is unfair to the machines.
In fact it is as though I were to say to you, "Let's have a contest
to see who can name the largest number. You go first, and I'll add
one to the number you name". Indeed the machine could be programmed
to type, "Describe how this human works, and I'll construct a true
sentence of arithmetic that he'll never know is true". Since the
construction of the undecided sentence is effective, the machine
can be programmed to make good on this boast.
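	The boast can even be cashed out in running code. The sketch
below is the same diagonal move transposed from arithmetic truth to
the halting problem, where it needs no proof-theoretic machinery; it
is an illustration of the technique, not the construction in Church's
theorem itself.

    def defeat(h):
        # Given any claimed total halting decider h(f, x), mechanically
        # construct a function g on which h must answer wrongly.
        def g(x):
            if h(g, x):          # h claims g halts on x ...
                while True:      # ... so refuse to halt,
                    pass
            return               # otherwise halt at once
        return g

    # For any total h, let g = defeat(h).  If h(g, g) is True, g(g)
    # loops forever; if it is False, g(g) halts.  Either way h is wrong
    # on (g, g), and the construction, not a human, produced the witness.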